Leakage-resilient coin tossing
Proceedings of the 25th International Symposium, DISC 2011, Rome, Italy, September 20-22, 2011.
The ability to collectively toss a common coin among n parties
in the presence of faults is an important primitive in the arsenal of
randomized distributed protocols. In the case of dishonest majority, it
was shown to be impossible to achieve less than 1/r bias in O(r) rounds
(Cleve STOC ’86). In the case of honest majority, in contrast, unconditionally
secure O(1)-round protocols for generating common unbiased
coins follow from general completeness theorems on multi-party secure
protocols in the secure channels model (e.g., BGW, CCD STOC ’88).
However, in the O(1)-round protocols with honest majority, parties
generate and hold secret values which are assumed to be perfectly hidden
from malicious parties: an assumption which is crucial to proving the
resulting common coin is unbiased. This assumption unfortunately does
not seem to hold in practice, as attackers can launch side-channel attacks
on the local state of honest parties and leak information on their secrets.
In this work, we present an O(1)-round protocol for collectively generating
an unbiased common coin, in the presence of leakage on the local
state of the honest parties. We tolerate t ≤ (1/3 − ε)n computationally
unbounded Byzantine faults and in addition an Ω(1)-fraction leakage on
each (honest) party’s secret state. Our results hold in the memory leakage
model (of Akavia, Goldwasser, Vaikuntanathan ’08) adapted to the
distributed setting.
Additional contributions of our work are the tools we introduce to
achieve the collective coin toss: a procedure for disjoint committee election,
and leakage-resilient verifiable secret sharing.
Funding: National Defense Science and Engineering Graduate Fellowship; National Science Foundation (U.S.) (CCF-1018064).
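The honest-majority protocols referenced above follow a common template: each party contributes a random bit, fixed in advance via (verifiable) secret sharing, and the common coin is the XOR of all contributions. A minimal Python sketch of the XOR-combining idea only; it omits the commitment, verifiable secret sharing, and leakage resilience that are the paper's actual contribution:

```python
import secrets

def combine(bits):
    """The common coin is the XOR of all parties' contributions.
    If at least one contributor is uniform and independent of the
    rest, the XOR is unbiased."""
    coin = 0
    for b in bits:
        coin ^= b
    return coin

def collective_coin(n: int) -> int:
    """Toy coin toss among n parties. Real protocols fix the bits in
    advance (e.g. via verifiable secret sharing) so a faulty party
    cannot choose its contribution after seeing the others."""
    return combine(secrets.randbits(1) for _ in range(n))
```

The essential point this illustrates is why hiding each contribution matters: a party that sees the other n − 1 bits before choosing its own can set the coin to any value it likes.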
Topology-Hiding Computation Beyond Semi-Honest Adversaries
Topology-hiding communication protocols allow a set of parties,
connected by an incomplete network with an unknown communication graph,
in which each party knows only its own neighbors, to construct a complete
communication network such that the network topology remains hidden
even from a powerful adversary who can corrupt parties. This
communication network can then be used to perform arbitrary tasks, for
example secure multi-party computation, in a topology-hiding manner.
Previously proposed protocols could only tolerate passive
corruption. This paper proposes protocols that can also tolerate
fail-corruption (i.e., the adversary can crash any party at
any point in time) and so-called semi-malicious corruption (i.e., the
adversary can control a corrupted party's randomness), without leaking
more than an arbitrarily small fraction of a bit of information about
the topology. A small-leakage protocol was recently proposed by Ball et al. [Eurocrypt\u2718], but only under the unrealistic set-up assumption that each party has a trusted hardware module containing secret correlated pre-set keys, and with the further two restrictions that only passively corrupted parties can be crashed by the adversary, and semi-malicious corruption is not tolerated. Since leaking a small
amount of information is unavoidable, as is the need to abort the
protocol in case of failures, our protocols seem to achieve the best
possible goal in a model with fail-corruption.
Further contributions of the paper are applications of the protocol to
obtain secure MPC protocols; this requires a way to bound the
aggregated leakage when multiple small-leakage protocols are
executed in parallel or sequentially. Moreover, while previous
protocols are based on the DDH assumption, a new so-called PKCR
public-key encryption scheme based on the LWE assumption is proposed,
allowing topology-hiding computation to be based on LWE. Furthermore, a
protocol using fully homomorphic encryption that achieves very low round
complexity is proposed.
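The earlier DDH-based topology-hiding protocols build on public-key encryption whose ciphertexts can be rerandomized, one of the properties bundled into PKCR encryption. A toy Python sketch of rerandomizable ElGamal with deliberately tiny, insecure parameters; the key-commutativity that PKCR additionally requires is not modeled here:

```python
import random

p, g = 467, 2  # tiny toy group parameters: illustration only, not secure

def keygen():
    x = random.randrange(1, p - 1)        # secret key x
    return x, pow(g, x, p)                # (sk, pk = g^x mod p)

def enc(pk, m):
    r = random.randrange(1, p - 1)
    return pow(g, r, p), (m * pow(pk, r, p)) % p   # (g^r, m * pk^r)

def rerandomize(pk, ct):
    """Multiply in a fresh encryption of 1: the new ciphertext is
    unlinkable to the old one but decrypts to the same message."""
    c1, c2 = ct
    s = random.randrange(1, p - 1)
    return (c1 * pow(g, s, p)) % p, (c2 * pow(pk, s, p)) % p

def dec(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - sk, p)) % p   # c2 * c1^(-sk) via Fermat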
Securing computation against continuous leakage
30th Annual Cryptology Conference, Santa Barbara, CA, USA, August 15-19, 2010. Proceedings.
We present a general method to compile any cryptographic algorithm into one which resists side-channel attacks of the "only computation leaks information" variety for an unbounded number of executions. Our method uses as a building block a semantically secure subsidiary bit-encryption scheme with the following additional operations: key refreshing, oblivious generation of ciphertexts, leakage-resilience re-generation, and blinded homomorphic evaluation of a single complete gate (e.g., NAND). Furthermore, the security properties of the subsidiary encryption scheme should withstand bounded leakage incurred while performing each of the above operations.
We show how to implement such a subsidiary encryption scheme under the DDH intractability assumption and the existence of a simple secure hardware component. The hardware component is independent of the encryption scheme secret key. The subsidiary encryption scheme resists leakage attacks where the leakage is computable in polynomial time and of length bounded by a constant fraction of the security parameter.
Funding: Israel Science Foundation (710267); United States-Israel Binational Science Foundation (710613); National Science Foundation (U.S.) (6914349); Weizmann KAMAR Grant.
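The operations required of the subsidiary bit-encryption scheme can be collected into an interface. The following Python sketch uses hypothetical method names (they are not from the paper) purely to make the list of required operations concrete:

```python
from abc import ABC, abstractmethod

class SubsidiaryBitEncryption(ABC):
    """Hypothetical interface for the compiler's building block: a
    semantically secure bit-encryption scheme with extra operations.
    Method names are illustrative only."""

    @abstractmethod
    def encrypt(self, pk, bit): ...

    @abstractmethod
    def decrypt(self, sk, ct): ...

    @abstractmethod
    def refresh_key(self, sk): ...            # key refreshing

    @abstractmethod
    def oblivious_ciphertext(self, pk): ...   # sample a ciphertext without knowing its plaintext

    @abstractmethod
    def regenerate(self, pk, ct): ...         # leakage-resilience re-generation

    @abstractmethod
    def eval_nand(self, pk, ct_a, ct_b): ...  # blinded homomorphic evaluation of one NAND gate
```

Each of these operations must itself remain secure under bounded leakage incurred while it runs, which is what makes the building block stronger than ordinary semantic security.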
Public-Key Encryption Schemes with Auxiliary Inputs
7th Theory of Cryptography Conference, TCC 2010, Zurich, Switzerland, February 9-11, 2010. Proceedings.
We construct public-key cryptosystems that remain secure even when the adversary is given any computationally uninvertible function of the secret key as auxiliary input (even one that may reveal the secret key information-theoretically). Our schemes are based on the decisional Diffie-Hellman (DDH) and Learning with Errors (LWE) problems.
As an independent technical contribution, we extend the Goldreich-Levin theorem to provide a hard-core (pseudorandom) value over large fields.
Funding: National Science Foundation (U.S.) (Grant CCF-0514167); National Science Foundation (U.S.) (Grant CCF-0635297); National Science Foundation (U.S.) (Grant NSF-0729011); Israel Science Foundation (700/08); Chais Family Fellows Program.
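For intuition, the hard-core value in the large-field Goldreich-Levin setting is the inner product of the secret vector with a random vector over GF(q). A short Python illustration of computing that value (the theorem's content, that this value is pseudorandom given any uninvertible function of the secret, is of course not captured by code):

```python
def inner_product_mod(s, r, q):
    """Hard-core value in the large-field Goldreich-Levin setting:
    the inner product <s, r> over GF(q), q prime. Here s is the
    secret vector and r is a public random vector of the same
    length."""
    assert len(s) == len(r)
    return sum(si * ri for si, ri in zip(s, r)) % q
```

Working over a large field rather than GF(2) is what yields a full pseudorandom field element per invocation instead of a single hard-core bit.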
Is Information-Theoretic Topology-Hiding Computation Possible?
Topology-hiding computation (THC) is a form of multi-party computation over an incomplete communication graph that maintains the privacy of the underlying graph topology. Existing THC protocols consider an adversary that may corrupt an arbitrary number of parties, and rely on cryptographic assumptions such as DDH.
In this paper we address the question of whether information-theoretic THC can be achieved by taking advantage of an honest majority. In contrast to the standard MPC setting, this problem has remained open in the topology-hiding realm, even for simple privacy-free functions like broadcast, and even when considering only semi-honest corruptions. We uncover a rich landscape of both positive and negative answers to the above question, showing that what types of graphs are used and how they are selected is an important factor in determining the feasibility of hiding topology information-theoretically. In particular, our results include the following.
We show that topology-hiding broadcast (THB) on a line with four nodes, secure against a single semi-honest corruption, implies key agreement. This result extends to broader classes of graphs, e.g., THB on a cycle with two semi-honest corruptions. On the other hand, we provide the first feasibility result for information-theoretic THC: for the class of cycle graphs, with a single semi-honest corruption.
Given the strong impossibilities, we put forth a weaker definition of distributional-THC, where the graph is selected from some distribution (as opposed to worst-case). We present a formal separation between the definitions, by showing a distribution for which information theoretic distributional-THC is possible, but even topology-hiding broadcast is not possible information-theoretically with the standard definition. We demonstrate the power of our new definition via a new connection to adaptively secure low-locality MPC, where distributional-THC enables parties to reuse a secret low-degree communication graph even in the face of adaptive corruptions
Circular and leakage resilient public-key encryption under subgroup indistinguishability (or: Quadratic residuosity strikes back)
30th Annual Cryptology Conference, Santa Barbara, CA, USA, August 15-19, 2010. Proceedings.
The main results of this work are new public-key encryption schemes that, under the quadratic residuosity (QR) assumption (or Paillier’s decisional composite residuosity (DCR) assumption), achieve key-dependent message security as well as high resilience to secret key leakage and high resilience to the presence of auxiliary input information.
In particular, under what we call the subgroup indistinguishability assumption, of which the QR and DCR are special cases, we can construct a scheme that has:
• Key-dependent message (circular) security. Achieves security even when encrypting affine functions of its own secret key (in fact, w.r.t. affine “key-cycles” of predefined length). Our scheme also meets the requirements for extending key-dependent message security to broader classes of functions beyond affine functions using previous techniques of Brakerski et al. or Barak et al.
• Leakage resiliency. Remains secure even if any adversarial low-entropy (efficiently computable) function of the secret key is given to the adversary. A proper selection of parameters allows for a “leakage rate” of (1 − o(1)) of the length of the secret key.
• Auxiliary-input security. Remains secure even if any sufficiently hard to invert (efficiently computable) function of the secret key is given to the adversary.
Our scheme is the first to achieve key-dependent security and auxiliary-input security based on the DCR and QR assumptions. Previous schemes that achieved these properties relied either on the DDH or LWE assumptions. The proposed scheme is also the first to achieve leakage resiliency for leakage rate (1 − o(1)) of the secret key length, under the QR assumption. We note that leakage resilient schemes under the DCR and the QR assumptions, for the restricted case of composite modulus product of safe primes, were implied by the work of Naor and Segev, using hash proof systems. However, under the QR assumption, known constructions of hash proof systems only yield a leakage rate of o(1) of the secret key length.
Funding: Microsoft Research.
An Adaptive Sublinear-Time Block Sparse Fourier Transform
The problem of approximately computing the dominant Fourier coefficients of a vector quickly, and using few samples in the time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with O(k log n log(n/k)) runtime [Hassanieh \emph{et al.}, STOC'12] and O(k log n) sample complexity [Indyk \emph{et al.}, FOCS'14]. These results are proved using non-adaptive algorithms, and the latter sample complexity result is essentially the best possible under the sparsity assumption alone: it is known that even adaptive algorithms must use Ω((k log(n/k))/log log n) samples [Hassanieh \emph{et al.}, STOC'12]. By {\em adaptive}, we mean being able to exploit previous samples in guiding the selection of further samples. This paper revisits the sparse FFT problem with the added twist that the sparse coefficients approximately obey a (k0, k1)-block sparse model. In this model, signal frequencies are clustered in k0 intervals of width k1 in Fourier space, and k = k0·k1 is the total sparsity. Signals arising in applications are often well approximated by this model with k0 ≪ k1. Our main result is the first sparse FFT algorithm for (k0, k1)-block sparse signals with a sample complexity of O*(k0·k1 + k0 log(1 + k0) log n) at constant signal-to-noise ratios, and sublinear runtime. A similar sample complexity was previously achieved in the works on {\em model-based compressive sensing} using random Gaussian measurements, but with Ω(n) runtime. To the best of our knowledge, our result is the first sublinear-time algorithm for model-based compressed sensing, and the first sparse FFT result that goes below the O(k log n) sample complexity bound. Interestingly, the aforementioned model-based compressive sensing result that relies on Gaussian measurements is non-adaptive, whereas our algorithm crucially uses {\em adaptivity} to achieve the improved sample complexity bound.
We prove that adaptivity is in fact necessary in the Fourier setting: any {\em non-adaptive} algorithm must use Ω(k0·k1 log(n/(k0·k1))) samples for the (k0, k1)-block sparse model, ruling out improvements over the vanilla sparsity assumption. Our main technical innovation for adaptivity is a new randomized energy-based importance sampling technique that may be of independent interest.
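For intuition about the model (not the algorithm), the following Python sketch samples a frequency support that is block sparse in the sense described above: k0 disjoint intervals of width k1 each, for total sparsity k = k0·k1. The function name and sampling procedure are illustrative assumptions, not from the paper:

```python
import random

def block_sparse_support(n, k0, k1, seed=0):
    """Sample a block-sparse frequency support in a length-n spectrum:
    k0 disjoint intervals of width k1, so total sparsity k = k0 * k1.
    Illustrates the signal model only, not the sparse FFT algorithm."""
    rng = random.Random(seed)
    support = set()
    while len(support) < k0 * k1:
        start = rng.randrange(0, n - k1 + 1)
        block = range(start, start + k1)
        # reject overlapping blocks so the example stays clean
        if support.isdisjoint(block):
            support.update(block)
    return sorted(support)
```

A signal drawn from this model has far more structure than an arbitrary k-sparse signal, which is exactly what an adaptive algorithm can exploit: once one frequency in a cluster is located, its neighbors are cheap to find.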
ResponseNet: revealing signaling and regulatory networks linking genetic and transcriptomic screening data
Cellular response to stimuli is typically complex and involves both regulatory and metabolic processes. Large-scale experimental efforts to identify components of these processes often comprise genetic screening and transcriptomic profiling assays. We previously established that in yeast, genetic screens tend to identify response regulators, while transcriptomic profiling assays tend to identify components of metabolic processes. ResponseNet is a network-optimization approach that integrates the results from these assays with data of known molecular interactions. Specifically, ResponseNet identifies a high-probability sub-network, composed of signaling and regulatory molecular interaction paths, through which putative response regulators may lead to the measured transcriptomic changes. Computationally, this is achieved by formulating a minimum-cost flow optimization problem and solving it efficiently using linear programming tools. The ResponseNet web server offers a simple interface for applying ResponseNet. Users can upload weighted lists of proteins and genes and obtain a sparse, weighted, molecular interaction sub-network connecting their data. The predicted sub-network and its gene ontology enrichment analysis are presented graphically or as text. Consequently, the ResponseNet web server enables researchers who were previously limited to separate analyses of their distinct, large-scale experiments to meaningfully integrate their data and substantially expand their understanding of the underlying cellular response. ResponseNet is available at http://bioinfo.bgu.ac.il/respnet.
Funding: Seventh Framework Programme (European Commission) (FP7-PEOPLE-MCA-IRG); United States-Israel Binational Science Foundation (Grant 2009323).
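In miniature, routing a single unit of flow at minimum cost reduces to finding the cheapest source-to-sink path, where edge costs reflect interaction confidence. The following Python sketch illustrates that special case on a hypothetical toy network (node names and costs are invented; ResponseNet itself solves the general min-cost flow problem with linear programming):

```python
import heapq

def cheapest_path(graph, src, dst):
    """Dijkstra's algorithm on non-negative edge costs. For one unit
    of flow, min-cost flow from src to dst is exactly the cheapest
    path. `graph` maps node -> {neighbor: cost}."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in graph.get(u, {}).items():
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path by walking predecessors back from dst
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]
```

With multiple units of flow and edge capacities, the cheapest routes can share and saturate edges, which is why the full problem is formulated as a linear program rather than repeated shortest-path queries.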
Single-shot security for one-time memories in the isolated qubits model
One-time memories (OTM's) are simple, tamper-resistant cryptographic devices,
which can be used to implement sophisticated functionalities such as one-time
programs. Can one construct OTM's whose security follows from some physical
principle? This is not possible in a fully-classical world, or in a
fully-quantum world, but there is evidence that OTM's can be built using
"isolated qubits" -- qubits that cannot be entangled, but can be accessed using
adaptive sequences of single-qubit measurements.
Here we present new constructions for OTM's using isolated qubits, which
improve on previous work in several respects: they achieve a stronger
"single-shot" security guarantee, which is stated in terms of the (smoothed)
min-entropy; they are proven secure against adversaries who can perform
arbitrary local operations and classical communication (LOCC); and they are
efficiently implementable.
These results use Wiesner's idea of conjugate coding, combined with
error-correcting codes that approach the capacity of the q-ary symmetric
channel, and a high-order entropic uncertainty relation, which was originally
developed for cryptography in the bounded quantum storage model.
Comment: v2: to appear in CRYPTO 2014. 21 pages, 3 figures.
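For intuition, Wiesner's conjugate coding stores a bit in one of two mutually unbiased bases; measuring in the matching basis recovers the bit, while measuring in the conjugate basis yields a uniformly random outcome. A classical Python simulation of those outcome statistics (an illustration only, not the paper's construction):

```python
import random

def encode(bit, basis):
    """Prepare a 'qubit' as (bit, basis): basis 0 is the computational
    basis, basis 1 the Hadamard (conjugate) basis."""
    return (bit, basis)

def measure(qubit, basis, rng=random):
    """Simulate a single-qubit measurement classically: measuring in
    the encoding basis returns the stored bit; measuring in the
    conjugate basis returns a uniformly random bit, matching the
    outcome statistics of the real quantum measurement."""
    bit, enc_basis = qubit
    if basis == enc_basis:
        return bit
    return rng.randrange(2)
```

An adversary restricted to single-qubit measurements (as in the isolated qubits model) must guess the basis for each position, destroying information whenever the guess is wrong; this is the leverage the OTM constructions build on.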